ImpactMojo
Premium

Research Design Worksheet

Planning Rigorous Impact Evaluations for Development Programs

📋 How to Use This Worksheet

This worksheet guides you through designing a credible impact evaluation from start to finish. Work through each section systematically, building your research design step by step. The completed worksheet will serve as your evaluation blueprint.

1 Program and Research Question Definition

Program Description

Describe the development program or intervention you want to evaluate:

Primary Research Question

State your main causal question clearly and specifically:

Theory of Change

Map out the causal pathway from intervention to outcomes:

Policy Relevance

Why does this question matter for policy and practice?

2 Outcome Measurement Strategy

Primary Outcomes

List the 1-3 most important outcomes and how you'll measure them:

Secondary Outcomes

Additional outcomes of interest (spillovers, unintended effects, mechanisms):
✅ Outcome Quality Checklist
Specific: Outcomes are precisely defined and measurable
Relevant: Outcomes directly relate to program objectives
Feasible: Data collection is realistic given budget and timeline
Pre-specified: Outcomes defined before seeing any results
Balanced: Include both intended and potential unintended effects

3 Identification Strategy Selection

Assignment Mechanism

How are participants selected for the program?
🎲 Randomized Trial

Random assignment to treatment

🔧 Instrumental Variables

Valid instrument available

📈 Difference-in-Differences

Policy change over time

📏 Regression Discontinuity

Threshold-based assignment

📊 Matching/Regression

Selection on observables

Method Justification

Why is your chosen method the most credible for this context?

Key Identifying Assumption

State the critical assumption needed for causal interpretation:
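For example, if difference-in-differences is the chosen method, the key identifying assumption is parallel trends: absent the program, treated and comparison units would have followed the same outcome trajectory. The canonical two-period estimating equation is:

```
Y_it = α_i + λ_t + β·(Treat_i × Post_t) + ε_it
```

where α_i are unit fixed effects, λ_t are time fixed effects, and β identifies the treatment effect only if parallel trends holds.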

4 Threats to Validity Assessment

🎯 Selection Bias
How might treated and control groups differ systematically?
🔄 Spillover Effects
Could treatment affect control group outcomes?
📊 Attrition/Missing Data
Who might drop out and why?
⚡ External Validity
How generalizable are results?

Mitigation Strategies

How will you address the highest-threat validity concerns?

5 Sample Size and Power Analysis

Minimum Detectable Effect (MDE)

What's the smallest effect size that would be policy-relevant?

Power Calculation Parameters

Fill in your power analysis assumptions:
| Parameter | Value | Justification |
| --- | --- | --- |
| Power (1-β) | ____ | Typically 0.80 or 0.90 |
| Significance (α) | ____ | Usually 0.05 |
| Baseline Mean | ____ | From pilot data or literature |
| Baseline SD | ____ | From pilot data or literature |
| R² (covariates) | ____ | Variance explained by controls |

Required Sample Size

Based on power calculation:
💡 Power Analysis Tips
  • Use conservative assumptions—err on the side of larger samples
  • Account for clustering if randomization is at group level
  • Adjust for expected attrition rates
  • Consider multiple hypothesis testing if many outcomes
  • Use pilot data or similar studies for parameter estimates
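The adjustments above can be folded into a single back-of-the-envelope calculation. Below is a minimal sketch in standard-library Python using the usual two-sample normal approximation; the function and parameter names are illustrative, not from any particular package:

```python
import math
from statistics import NormalDist

def n_per_arm(mde, sd, alpha=0.05, power=0.80,
              r2=0.0, icc=0.0, cluster_size=1, attrition=0.0):
    """Approximate sample size per arm for a two-arm comparison of means.

    mde:          minimum detectable effect (in outcome units)
    sd:           standard deviation of the outcome
    r2:           share of outcome variance explained by baseline covariates
    icc:          intracluster correlation (for group-level randomization)
    cluster_size: average number of units per cluster
    attrition:    expected share of the sample lost to follow-up
    """
    z = NormalDist().inv_cdf
    base = 2 * (sd ** 2) * (1 - r2) * (z(1 - alpha / 2) + z(power)) ** 2 / mde ** 2
    deff = 1 + (cluster_size - 1) * icc          # design effect for clustering
    return math.ceil(base * deff / (1 - attrition))

# Detecting a 0.25 SD effect with 80% power needs roughly 252 units per arm;
# clustering and attrition push the requirement up substantially.
print(n_per_arm(mde=0.25, sd=1.0))  # 252
print(n_per_arm(mde=0.25, sd=1.0, icc=0.05, cluster_size=20, attrition=0.10))
```

Treat the result as a lower bound and stress-test it against less favorable assumptions, as the tips above suggest.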

6 Data Collection Strategy

Baseline Data Collection

When and how will you collect pre-treatment data?

Follow-up Timeline

When will you measure outcomes?
| Time Point | Timing After Treatment | Key Outcomes | Rationale |
| --- | --- | --- | --- |
| Baseline | Before treatment | All baseline variables | Control for pre-treatment differences |
| Follow-up 1 | ____ | ____ | ____ |
| Follow-up 2 | ____ | ____ | ____ |

Data Sources

What data will you collect and from where?
✅ Data Quality Checklist
Reliable: Measurement instruments are validated and consistent
Valid: Measures capture the intended concepts
Timely: Data collection timing aligns with expected treatment effects
Comprehensive: Covers both intended and unintended outcomes
Ethical: Data collection respects participant privacy and consent

7 Analysis Plan

Primary Specification

Write out your main regression equation:
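As an illustration only (your specification should match your identification strategy), a common ANCOVA specification for a randomized trial is:

```
Y_i = α + β·Treat_i + γ·Y_i,baseline + X_i'δ + ε_i
```

where β is the average treatment effect, Y_i,baseline is the pre-treatment value of the outcome, and standard errors are clustered at the unit of randomization.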

Control Variables

What variables will you include and why?

Robustness Checks

How will you test the sensitivity of your results?

Heterogeneity Analysis

For which subgroups might effects differ?
⚠️ Pre-Registration Requirements

Complete and register your analysis plan BEFORE seeing any outcome data. This prevents data mining and ensures credible results. Include:

  • Primary and secondary outcome definitions
  • Exact specification for main analysis
  • Predetermined subgroup analyses
  • Multiple testing corrections
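As an example of the last item, two simple family-wise corrections can be sketched in a few lines of standard-library Python (a sketch for intuition, not a recommendation of one correction over another):

```python
def bonferroni(pvals):
    """Bonferroni-adjusted p-values: multiply each by the number of tests."""
    m = len(pvals)
    return [min(p * m, 1.0) for p in pvals]

def holm(pvals):
    """Holm step-down adjustment: less conservative, still controls FWER."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(running_max, 1.0)
    return adjusted

print(bonferroni([0.01, 0.04, 0.30]))  # ≈ [0.03, 0.12, 0.90]
print(holm([0.01, 0.04, 0.30]))        # ≈ [0.03, 0.08, 0.30]
```

Whichever correction you choose, name it in the pre-analysis plan along with the exact set of hypotheses it applies to.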

8 Implementation and Timeline

Project Timeline

Map out key milestones:
| Phase | Duration | Key Activities | Deliverables |
| --- | --- | --- | --- |
| Design & Setup | ____ | IRB approval, instrument development | Pre-analysis plan |
| Baseline | ____ | Data collection, randomization | Baseline report |
| Implementation | ____ | Program rollout, monitoring | Implementation report |
| Follow-up | ____ | Endline data collection | Clean dataset |
| Analysis | ____ | Statistical analysis, report writing | Final report |

Budget Estimation

Major cost categories and estimates:

Risk Management

What could go wrong and how will you address it?

Stakeholder Engagement

How will you involve implementers and policymakers?

9 Final Quality Check

✅ Design Quality Assessment
Clear Research Question: Specific, policy-relevant causal question
Credible Identification: Plausible strategy for causal inference
Adequate Power: Sufficient sample size to detect meaningful effects
Threat Mitigation: Major validity concerns addressed
Feasible Implementation: Realistic timeline and budget
Ethical Compliance: Meets standards for research with human subjects
Pre-specified Analysis: Clear plan registered before data collection

Overall Design Assessment

Summarize the strengths and limitations of your design:

Dissemination Plan

How will you share results with relevant audiences?
📚 Additional Resources